
change staticRNN to while #48213

Merged

Conversation

phlrain (Collaborator) commented Nov 21, 2022

PR types

Others

PR changes

Others

Describe

We plan to remove the recurrent op and StaticRNN, and replace the StaticRNN that paddle.nn.RNN depends on with while.

@paddle-bot
paddle-bot bot commented Nov 21, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

Comment on lines +443 to +458
auto shape = var_desc->GetShape();
VLOG(8) << "Found uninitialized tensor " << outside_og_name
        << " in step 0, fill it with 0.0f. dims="
        << phi::make_ddim(shape);
framework::AttributeMap attrs;
attrs["dtype"] = var_desc->GetDataType();
attrs["shape"] = phi::vectorize<int>(phi::make_ddim(shape));
attrs["value"] = 0.0f;

auto var_name = outside_og_name;
auto zero_op =
    framework::OpRegistry::CreateOp("fill_constant",
                                    framework::VariableNameMap{},
                                    {{"Out", {var_name}}},
                                    attrs);
zero_op->Run(scope, dev_place);
Contributor


Better to use SetConstant or Full API directly.

Collaborator Author


Logically this could be changed to full_like, but:

  1. Inside the while op there is no compile-time element type T, so a visitor would be needed.
  2. The full_like API returns the dense tensor as its return value; pushing it back into the scope would take some extra steps.

Rewritten that way, the logic would end up more complex than it is now.
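The visitor problem in point 1 can be illustrated with a minimal, self-contained sketch. The `DType` enum, `Buffer` alias, and `MakeZeroBuffer` function below are hypothetical stand-ins, not Paddle APIs: because the element type is only known at run time, a fill must dispatch through a switch/visitor to recover a concrete `T`.

```cpp
#include <cassert>
#include <cstdint>
#include <variant>
#include <vector>

// Hypothetical stand-in for a runtime dtype tag (not a Paddle type).
enum class DType { kFloat32, kInt64 };

// A buffer whose element type is only known at run time.
using Buffer = std::variant<std::vector<float>, std::vector<int64_t>>;

// Visitor-style dispatch: select the concrete element type T from the
// runtime tag, then build a zero-filled buffer of that type.
Buffer MakeZeroBuffer(DType dtype, std::size_t n) {
  switch (dtype) {
    case DType::kFloat32:
      return std::vector<float>(n, 0.0f);
    case DType::kInt64:
      return std::vector<int64_t>(n, 0);
  }
  return std::vector<float>{};  // unreachable
}
```

This is the boilerplate the author is referring to: a fill_constant op hides this dispatch inside the op, whereas calling a typed API directly would surface it at the call site.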

auto &og_outside = *scope.FindVar(outside_og_name);
auto &og_inside = *cur_scope.Var(inside_og_name);
if (og_outside.IsType<phi::DenseTensor>()) {
  auto &outside_tensor = og_outside.Get<phi::DenseTensor>();
  auto &inside_tensor = *og_inside.GetMutable<phi::DenseTensor>();
  inside_tensor.set_lod(outside_tensor.lod());
  inside_tensor.ShareDataWith(outside_tensor);
  if (outside_tensor.IsInitialized()) {
Contributor


Is this check still needed now that the set-zero logic has been added?

Collaborator Author


It should be removed here.
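The ShareDataWith semantics under discussion (the inside tensor aliasing the outside tensor's storage rather than copying it) can be sketched with a self-contained stand-in. `SharedBuf` below is hypothetical and only mimics the aliasing behavior, not the real DenseTensor API:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-in for a tensor that can alias another tensor's
// storage, mimicking the aliasing behavior of ShareDataWith.
struct SharedBuf {
  std::shared_ptr<std::vector<float>> data;

  // Alias the other buffer's storage: after this call, both objects
  // observe the same underlying elements (no copy is made).
  void ShareDataWith(const SharedBuf &other) { data = other.data; }
};
```

Because the two objects share one buffer, a write through either side is visible through the other, which is why the initialization state of the outside tensor matters to the inside one.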

auto is_var_input_and_output =
    std::find(outside_og_names.begin(),
              outside_og_names.end(),
              pg_ig_names[param_id]) != outside_og_names.end();
Contributor


Why change it?

Collaborator Author


Previously this obtained an iterator; now the iterator comparison is hoisted so the membership check happens up front.

Contributor


ok
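The refactor discussed in this thread can be sketched in isolation; the function name below is illustrative and the container contents are made up, not taken from the PR:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Illustrative stand-in for the PR's check. Instead of keeping the
// iterator returned by std::find and comparing it later, the end()
// comparison is done immediately, yielding a bool.
bool IsVarInputAndOutput(const std::vector<std::string> &outside_og_names,
                         const std::string &name) {
  return std::find(outside_og_names.begin(),
                   outside_og_names.end(),
                   name) != outside_og_names.end();
}
```

Hoisting the comparison keeps the raw iterator out of the surrounding code, so later logic only deals with a boolean.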

2742195759 (Contributor) previously approved these changes Dec 12, 2022

LGTM

zhiqiu (Contributor) previously approved these changes Dec 12, 2022

LGTM

@phlrain phlrain dismissed stale reviews from zhiqiu and 2742195759 via 24f24ca December 13, 2022 14:41
@phlrain phlrain closed this Dec 14, 2022
@phlrain phlrain reopened this Dec 14, 2022
zhiqiu (Contributor) previously approved these changes Dec 14, 2022

LGTM

@PaddlePaddle PaddlePaddle locked as too heated and limited conversation to collaborators Dec 15, 2022
@PaddlePaddle PaddlePaddle unlocked this conversation Dec 15, 2022
@phlrain phlrain merged commit 6953689 into PaddlePaddle:develop Dec 16, 2022